

Search for: All records

Creators/Authors contains: "Rao, Rajesh P. N."


  1. Abstract

    Humans frequently interact with agents whose intentions can fluctuate between competition and cooperation over time. It is unclear how the brain adapts to fluctuating intentions of others when the nature of the interactions (to cooperate or compete) is not explicitly and truthfully signaled. Here, we use model-based fMRI and a task in which participants thought they were playing with another player. In fact, they played with an algorithm that alternated without signaling between cooperative and competitive strategies. We show that a neurocomputational mechanism with arbitration between competitive and cooperative experts outperforms other learning models in predicting choice behavior. At the brain level, the fMRI results show that the ventral striatum and ventromedial prefrontal cortex track the difference of reliability between these experts. When attributing competitive intentions, we find increased coupling between these regions and a network that distinguishes prediction errors related to competition and cooperation. These findings provide a neurocomputational account of how the brain arbitrates dynamically between cooperative and competitive intentions when making adaptive social decisions.
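The arbitration mechanism described above can be sketched in a few lines: each expert's reliability is tracked from its recent prediction errors, and a sigmoid over the reliability difference sets how strongly the cooperative expert drives the next choice. This is an illustrative sketch, not the authors' fitted model; the update rule, learning rate, and temperature are assumptions.

```python
import math

def update_reliability(rel, pred_error, alpha=0.2):
    """Track an expert's reliability as a leaky average that rises when its
    absolute prediction error is low. Illustrative update, not the paper's."""
    return rel + alpha * ((1.0 - abs(pred_error)) - rel)

def arbitration_weight(rel_coop, rel_comp, temperature=0.2):
    """Sigmoid of the reliability difference: the probability of relying on
    the cooperative expert for the next choice."""
    diff = rel_coop - rel_comp
    return 1.0 / (1.0 + math.exp(-diff / temperature))
```

With equal reliabilities the arbitrator is indifferent (weight 0.5); as one expert's predictions become more reliable, control shifts smoothly toward it.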

  2. Abstract

    Intracortical microstimulation (ICMS) is commonly used in many experimental and clinical paradigms; however, its effects on the activation of neurons are still not completely understood. To document the responses of cortical neurons in awake nonhuman primates to stimulation, we recorded single-unit activity while delivering single-pulse stimulation via Utah arrays implanted in primary motor cortex (M1) of three macaque monkeys. Stimuli between 5 and 50 μA delivered to single channels reliably evoked spikes in neurons recorded throughout the array with delays of up to 12 ms. ICMS pulses also induced a period of inhibition lasting up to 150 ms that typically followed the initial excitatory response. Higher current amplitudes led to a greater probability of evoking a spike and extended the duration of inhibition. The likelihood of evoking a spike in a neuron was dependent on the spontaneous firing rate as well as the delay between its most recent spike time and stimulus onset. Tonic repetitive stimulation between 2 and 20 Hz often modulated both the probability of evoking spikes and the duration of inhibition; high-frequency stimulation was more likely to change both responses. On a trial-by-trial basis, whether a stimulus evoked a spike did not affect the subsequent inhibitory response; however, their changes over time were often positively or negatively correlated. Our results document the complex dynamics of cortical neural responses to electrical stimulation that need to be considered when using ICMS for scientific and clinical applications.
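As a concrete illustration of one analysis implied above, the probability of evoking a spike can be estimated as the fraction of stimulus pulses followed by at least one spike within the reported ~12 ms window. A minimal sketch; the function name and exact windowing are assumptions, not the authors' code.

```python
import bisect

def evoked_spike_probability(spike_times, stim_times, window=0.012):
    """Fraction of stimulus pulses followed by at least one spike within
    `window` seconds (12 ms, matching the evoked-spike delays reported
    above). Both lists are in seconds and sorted ascending."""
    hits = 0
    for t in stim_times:
        i = bisect.bisect_right(spike_times, t)      # first spike after the pulse
        if i < len(spike_times) and spike_times[i] - t <= window:
            hits += 1
    return hits / len(stim_times) if stim_times else 0.0
```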

  3. Abstract

    Tracking an odour plume to locate its source under variable wind and plume statistics is a complex task. Flying insects routinely accomplish such tracking, often over long distances, in pursuit of food or mates. Several aspects of this remarkable behaviour and its underlying neural circuitry have been studied experimentally. Here we take a complementary in silico approach to develop an integrated understanding of their behaviour and neural computations. Specifically, we train artificial recurrent neural network agents using deep reinforcement learning to locate the source of simulated odour plumes that mimic features of plumes in a turbulent flow. Interestingly, the agents’ emergent behaviours resemble those of flying insects, and the recurrent neural networks learn to compute task-relevant variables with distinct dynamic structures in population activity. Our analyses put forward a testable behavioural hypothesis for tracking plumes in changing wind direction, and we provide key intuitions for memory requirements and neural dynamics in odour plume tracking.
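The agents above are recurrent policies: a hidden state carries memory of recent odour encounters, and actions are read out from that state each step. A toy single step of such a policy is sketched below with random stand-in weights; the paper's agents are trained end to end with deep reinforcement learning, which is omitted here.

```python
import math
import random

random.seed(0)
H, X, A = 8, 3, 2   # hidden units; inputs [odour, wind_x, wind_y]; outputs [turn, speed]
W_h = [[random.gauss(0, 0.1) for _ in range(H)] for _ in range(H)]
W_x = [[random.gauss(0, 0.1) for _ in range(X)] for _ in range(H)]
W_a = [[random.gauss(0, 0.1) for _ in range(H)] for _ in range(A)]

def policy_step(h, obs):
    """One step of a recurrent policy: update the hidden state from the
    current observation, then emit an action from the new state."""
    h_new = [math.tanh(sum(W_h[i][j] * h[j] for j in range(H)) +
                       sum(W_x[i][k] * obs[k] for k in range(X)))
             for i in range(H)]
    action = [sum(W_a[a][i] * h_new[i] for i in range(H)) for a in range(A)]
    return h_new, action
```

Because the hidden state is fed back each step, the agent can integrate intermittent odour hits over time, which is the memory requirement the analyses above probe.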

  4. Abstract

    Objective. Recent advances in neural decoding have accelerated the development of brain–computer interfaces aimed at assisting users with everyday tasks such as speaking, walking, and manipulating objects. However, current approaches for training neural decoders commonly require large quantities of labeled data, which can be laborious or infeasible to obtain in real-world settings. Alternatively, self-supervised models that share self-generated pseudo-labels between two data streams have shown exceptional performance on unlabeled audio and video data, but it remains unclear how well they extend to neural decoding. Approach. We learn neural decoders without labels by leveraging multiple simultaneously recorded data streams, including neural, kinematic, and physiological signals. Specifically, we apply cross-modal, self-supervised deep clustering to train decoders that can classify movements from brain recordings. After training, we then isolate the decoders for each input data stream and compare the accuracy of decoders trained using cross-modal deep clustering against supervised and unimodal, self-supervised models. Main results. We find that sharing pseudo-labels between two data streams during training substantially increases decoding performance compared to unimodal, self-supervised models, with accuracies approaching those of supervised decoders trained on labeled data. Next, we extend cross-modal decoder training to three or more modalities, achieving state-of-the-art neural decoding accuracy that matches or slightly exceeds the performance of supervised models. Significance. We demonstrate that cross-modal, self-supervised decoding can be applied to train neural decoders when few or no labels are available and extend the cross-modal framework to share information among three or more data streams, further improving self-supervised training.
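The core idea above — cluster one stream's features and use the resulting pseudo-labels to supervise a decoder on the other stream — can be sketched with a minimal k-means. This is schematic: the paper's pipeline uses learned deep features and alternating cross-modal training, which are omitted here.

```python
import random

def kmeans_labels(points, k, iters=20, seed=0):
    """Minimal k-means returning one pseudo-label per sample. In cross-modal
    deep clustering, labels computed on one modality (e.g. kinematics) would
    serve as training targets for the other modality's decoder (e.g. neural)."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        labels = [min(range(k),
                      key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
                  for p in points]
        for j in range(k):
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:
                centers[j] = [sum(c) / len(members) for c in zip(*members)]
    return labels
```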

  5. Abstract

    Understanding the neural basis of human movement in naturalistic scenarios is critical for expanding neuroscience research beyond constrained laboratory paradigms. Here, we describe our Annotated Joints in Long-term Electrocorticography for 12 human participants (AJILE12) dataset, the largest human neurobehavioral dataset that is publicly available; the dataset was recorded opportunistically during passive clinical epilepsy monitoring. AJILE12 includes synchronized intracranial neural recordings and upper body pose trajectories across 55 semi-continuous days of naturalistic movements, along with relevant metadata, including thousands of wrist movement events and annotated behavioral states. Neural recordings are available at 500 Hz from at least 64 electrodes per participant, for a total of 1280 hours. Pose trajectories at 9 upper-body keypoints were estimated from 118 million video frames. To facilitate data exploration and reuse, we have shared AJILE12 on The DANDI Archive in the Neurodata Without Borders (NWB) data standard and developed a browser-based dashboard.
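Because the neural recordings (500 Hz, as stated above) and the pose trajectories are synchronized but sampled at different rates, aligning the two streams reduces to a rate conversion. A minimal sketch, where the 30 fps video rate is an assumption not stated in the abstract:

```python
def neural_index(frame_idx, video_fps=30.0, neural_fs=500.0):
    """Map a video frame index to the nearest neural sample index for two
    synchronized streams. neural_fs=500 Hz matches the dataset description;
    video_fps=30 is an assumed camera rate."""
    return round(frame_idx * neural_fs / video_fps)
```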

  6. Abstract

    In perceptual decisions, subjects infer hidden states of the environment based on noisy sensory information. Here we show that both choice and its associated confidence are explained by a Bayesian framework based on partially observable Markov decision processes (POMDPs). We test our model on monkeys performing a direction-discrimination task with post-decision wagering, demonstrating that the model explains objective accuracy and predicts subjective confidence. Further, we show that the model replicates well-known discrepancies of confidence and accuracy, including the hard-easy effect, opposing effects of stimulus variability on confidence and accuracy, dependence of confidence ratings on simultaneous or sequential reports of choice and confidence, apparent difference between choice and confidence sensitivity, and seemingly disproportionate influence of choice-congruent evidence on confidence. These effects may not be signatures of sub-optimal inference or discrepant computational processes for choice and confidence. Rather, they arise in Bayesian inference with incomplete knowledge of the environment.
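The core computation above — a Bayesian belief over the hidden direction updated by noisy evidence, with confidence read out as the posterior probability of the chosen option — can be sketched as follows. The Gaussian likelihoods and their parameters are illustrative, not the paper's fitted POMDP.

```python
import math

def belief_update(prior_right, evidence, mu=0.5, sigma=1.0):
    """Bayesian update of the belief that motion is rightward after one noisy
    sample, with likelihoods N(+mu, sigma) for 'right' and N(-mu, sigma) for
    'left' (assumed parameters for illustration)."""
    def lik(x, m):
        return math.exp(-((x - m) ** 2) / (2 * sigma ** 2))
    pr = lik(evidence, mu) * prior_right
    pl = lik(evidence, -mu) * (1 - prior_right)
    return pr / (pr + pl)

def choose_with_confidence(samples):
    """Choice = side with the higher posterior; confidence = that posterior."""
    b = 0.5
    for x in samples:
        b = belief_update(b, x)
    return ("right", b) if b >= 0.5 else ("left", 1 - b)
```

Because choice and confidence are both read out from the same posterior, effects like the hard-easy dissociation can emerge from the inference itself rather than from separate processes.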

  7. Abstract

    Objective. Advances in neural decoding have enabled brain-computer interfaces to perform increasingly complex and clinically-relevant tasks. However, such decoders are often tailored to specific participants, days, and recording sites, limiting their practical long-term usage. Therefore, a fundamental challenge is to develop neural decoders that can robustly train on pooled, multi-participant data and generalize to new participants. Approach. We introduce a new decoder, HTNet, which uses a convolutional neural network with two innovations: (a) a Hilbert transform that computes spectral power at data-driven frequencies and (b) a layer that projects electrode-level data onto predefined brain regions. The projection layer critically enables applications with intracranial electrocorticography (ECoG), where electrode locations are not standardized and vary widely across participants. We trained HTNet to decode arm movements using pooled ECoG data from 11 of 12 participants and tested performance on unseen ECoG or electroencephalography (EEG) participants; these pretrained models were also subsequently fine-tuned to each test participant. Main results. HTNet outperformed state-of-the-art decoders when tested on unseen participants, even when a different recording modality was used. By fine-tuning these generalized HTNet decoders, we achieved performance approaching the best tailored decoders with as few as 50 ECoG or 20 EEG events. We were also able to interpret HTNet’s trained weights and demonstrate its ability to extract physiologically-relevant features. Significance. By generalizing to new participants and recording modalities, robustly handling variations in electrode placement, and allowing participant-specific fine-tuning with minimal data, HTNet is applicable across a broader range of neural decoding applications compared to current state-of-the-art decoders.
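The role of the Hilbert-transform step — turning a band-limited signal into its amplitude envelope so spectral power can be computed — can be illustrated with a plain DFT-based analytic signal. This is a naive O(N²) version for clarity; HTNet implements the operation as a differentiable network layer, which this sketch does not reproduce.

```python
import cmath
import math

def hilbert_envelope(x):
    """Amplitude envelope via the analytic signal: take the DFT, zero the
    negative frequencies, double the positive ones (DC and Nyquist kept
    as-is), inverse-transform, and return the magnitude."""
    n = len(x)
    X = [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
         for k in range(n)]
    for k in range(n):
        if k == 0 or (n % 2 == 0 and k == n // 2):
            pass                    # DC and Nyquist bins unchanged
        elif k < (n + 1) // 2:
            X[k] *= 2               # positive frequencies doubled
        else:
            X[k] = 0                # negative frequencies removed
    analytic = [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
                for t in range(n)]
    return [abs(a) for a in analytic]
```

For a unit-amplitude sinusoid spanning whole periods, the envelope is flat at 1, which is exactly the band-power information a decoder can then learn from.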

  8. Abstract

    We present BrainNet which, to our knowledge, is the first multi-person non-invasive direct brain-to-brain interface for collaborative problem solving. The interface combines electroencephalography (EEG) to record brain signals and transcranial magnetic stimulation (TMS) to deliver information noninvasively to the brain. The interface allows three human subjects to collaborate and solve a task using direct brain-to-brain communication. Two of the three subjects are designated as “Senders” whose brain signals are decoded using real-time EEG data analysis. The decoding process extracts each Sender’s decision about whether to rotate a block in a Tetris-like game before it is dropped to fill a line. The Senders’ decisions are transmitted via the Internet to the brain of a third subject, the “Receiver,” who cannot see the game screen. The Senders’ decisions are delivered to the Receiver’s brain via magnetic stimulation of the occipital cortex. The Receiver integrates the information received from the two Senders and uses an EEG interface to make a decision about either turning the block or keeping it in the same orientation. A second round of the game provides an additional chance for the Senders to evaluate the Receiver’s decision and send feedback to the Receiver’s brain, and for the Receiver to rectify a possible incorrect decision made in the first round. We evaluated the performance of BrainNet in terms of (1) Group-level performance during the game, (2) True/False positive rates of subjects’ decisions, and (3) Mutual information between subjects. Five groups, each with three human subjects, successfully used BrainNet to perform the collaborative task, with an average accuracy of 81.25%. Furthermore, by varying the information reliability of the Senders by artificially injecting noise into one Sender’s signal, we investigated how the Receiver learns to integrate noisy signals in order to make a correct decision. 
We found that like conventional social networks, BrainNet allows Receivers to learn to trust the Sender who is more reliable, in this case, based solely on the information transmitted directly to their brains. Our results point the way to future brain-to-brain interfaces that enable cooperative problem solving by humans using a “social network” of connected brains.
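One of the evaluation metrics above, mutual information between subjects, can be computed directly from joint counts of their decision sequences. A minimal sketch, assuming discrete decisions (e.g. rotate vs. keep):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Mutual information in bits between two discrete decision sequences,
    e.g. a Sender's decoded decisions and the Receiver's final decisions."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))           # joint counts
    px, py = Counter(xs), Counter(ys)    # marginal counts
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())
```

Identical binary sequences yield 1 bit, independent ones 0 bits, so the metric directly quantifies how much of a Sender's decision is reflected in the Receiver's.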

  9. Abstract

    Direct cortical stimulation (DCS) of primary somatosensory cortex (S1) could help restore sensation and provide task-relevant feedback in a neuroprosthesis. However, the psychophysics of S1 DCS is poorly studied, including any comparison to cutaneous haptic stimulation. We compare the response times to DCS of human hand somatosensory cortex through electrocorticographic grids with response times to haptic stimuli delivered to the hand in four subjects. We found that subjects respond significantly slower to S1 DCS than to natural, haptic stimuli for a range of DCS train durations. Median response times for haptic stimulation varied from 198 ms to 313 ms, while median responses to reliably perceived DCS ranged from 254 ms for one subject, all the way to 528 ms for another. We discern no significant impact of learning or habituation through the analysis of blocked trials, and find no significant impact of cortical stimulation train duration on response times. Our results provide a realistic set of expectations for latencies with somatosensory DCS feedback for future neuroprosthetic applications and motivate the study of neural mechanisms underlying human perception of somatosensation via DCS.
